
    Integrated Reservoir Management under Stochastic Conditions

    Keywords: Economic optimization, Lake levels, Marketed and non-marketed water uses, Non-linear programming, Recreational benefits, Reservoir management, Stochastic inflows, Value of a visitor day, Environmental Economics and Policy, International Development, Land Economics/Use, Production Economics, Productivity Analysis, Public Economics, Resource/Energy Economics and Policy, Risk and Uncertainty

    Optimal Allocation of Reservoir Water

    The purpose of this paper is to determine the optimal allocation of reservoir water among consumptive and non-consumptive uses. A non-linear mathematical programming model is developed to allocate Lake Tenkiller water among competing uses so as to maximize net social benefit. A mass balance equation is used to determine the level and volume of water in the lake. This paper examines the effect of water management on lake resources when recreational values are and are not included as control variables in the optimization process. Results show that maintaining the lake at the 'normal lake level' of 632 feet during the summer months generates more recreational benefit than reducing the lake level by releasing water for hydropower generation.
    Keywords: consumptive and non-consumptive use, mass balance equation, non-linear mathematical programming, optimization, recreational uses, water allocation, Resource/Energy Economics and Policy
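    The allocation problem described above can be sketched as a constrained non-linear program: choose monthly releases for hydropower and consumptive use so that total benefit is maximized while a mass balance equation tracks lake storage. The following is a minimal illustrative sketch, not the authors' model; the benefit functions, inflow series, and storage bounds are assumptions made purely for demonstration.

    # Hypothetical sketch of a reservoir-allocation non-linear program.
    # Benefit functions, inflows, and bounds are illustrative assumptions,
    # not values from the paper.
    import numpy as np
    from scipy.optimize import minimize

    T = 12                                   # months in the planning horizon
    inflow = np.full(T, 50.0)                # assumed monthly inflow (1000 acre-ft)
    s0 = 600.0                               # assumed initial storage (1000 acre-ft)

    def net_benefit(x):
        """Negative of total benefit (scipy minimizes)."""
        release_hydro = x[:T]                # releases for hydropower
        release_cons = x[T:]                 # releases for consumptive use
        storage = s0 + np.cumsum(inflow - release_hydro - release_cons)
        hydro = 2.0 * release_hydro          # assumed linear hydropower benefit
        cons = 5.0 * np.sqrt(release_cons)   # assumed concave consumptive benefit
        recreation = 10.0 - 0.01 * (storage - 620.0) ** 2   # peaks at a target level
        return -(hydro.sum() + cons.sum() + recreation.sum())

    def storage_nonneg(x):
        """Mass balance: storage carried forward each month must stay non-negative."""
        release_hydro, release_cons = x[:T], x[T:]
        return s0 + np.cumsum(inflow - release_hydro - release_cons)

    x0 = np.full(2 * T, 10.0)
    res = minimize(
        net_benefit, x0, method="SLSQP",
        bounds=[(0.0, 60.0)] * (2 * T),
        constraints=[{"type": "ineq", "fun": storage_nonneg}],
    )
    print("optimal monthly releases:", np.round(res.x, 1))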

    Widespread recombination, reassortment, and transmission of unbalanced compound viral genotypes in natural arenavirus infections.

    Arenaviruses are one of the largest families of human hemorrhagic fever viruses and are known to infect both mammals and snakes. Arenaviruses package a large (L) and small (S) genome segment in their virions. For segmented RNA viruses like these, novel genotypes can be generated through mutation, recombination, and reassortment. Although it is believed that an ancient recombination event led to the emergence of a new lineage of mammalian arenaviruses, neither recombination nor reassortment has been definitively documented in natural arenavirus infections. Here, we used metagenomic sequencing to survey the viral diversity present in captive arenavirus-infected snakes. From 48 infected animals, we determined the complete or near-complete sequence of 210 genome segments that grouped into 23 L and 11 S genotypes. The majority of snakes were multiply infected, with up to 4 distinct S and 11 distinct L segment genotypes in individual animals. This S/L imbalance was typical: in all cases intrahost L segment genotypes outnumbered S genotypes, and a particular S segment genotype dominated in individual animals and at a population level. We corroborated sequencing results by qRT-PCR and virus isolation, and isolates replicated as ensembles in culture. Numerous instances of recombination and reassortment were detected, including recombinant segments with unusual organizations featuring 2 intergenic regions and superfluous content, which were capable of stable replication and transmission despite their atypical structures. Overall, this represents intrahost diversity of an extent and form that goes well beyond what has been observed for arenaviruses or for viruses in general. This diversity can be plausibly attributed to the captive intermingling of sub-clinically infected wild-caught snakes. Thus, beyond providing a unique opportunity to study arenavirus evolution and adaptation, these findings allow the investigation of unintended anthropogenic impacts on viral ecology, diversity, and disease potential.
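    As a simple illustration of the intrahost genotype counts reported above, one could tabulate the distinct S and L segment genotypes observed per animal after each assembled segment has been assigned to a genotype. The sketch below uses an invented data structure and genotype labels purely for illustration; it is not the study's analysis pipeline.

    # Hypothetical sketch: tallying distinct S and L segment genotypes per animal
    # to check the intrahost S/L imbalance described above. The assignments are
    # invented toy data, not results from the study.
    from collections import defaultdict

    # (animal_id, segment, genotype) assignments, e.g. produced by clustering
    # assembled genome segments at a chosen nucleotide-identity threshold.
    assignments = [
        ("snake_01", "L", "L3"), ("snake_01", "L", "L7"), ("snake_01", "L", "L12"),
        ("snake_01", "S", "S1"),
        ("snake_02", "L", "L3"), ("snake_02", "L", "L9"),
        ("snake_02", "S", "S1"), ("snake_02", "S", "S4"),
    ]

    per_animal = defaultdict(lambda: {"L": set(), "S": set()})
    for animal, segment, genotype in assignments:
        per_animal[animal][segment].add(genotype)

    for animal, segs in sorted(per_animal.items()):
        n_l, n_s = len(segs["L"]), len(segs["S"])
        imbalance = "L > S" if n_l > n_s else "L <= S"
        print(f"{animal}: {n_l} L genotypes, {n_s} S genotypes ({imbalance})")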

    The Canada-UK Deep Submillimetre Survey: The Survey of the 14-hour field

    We have used SCUBA to survey an area of 50 square arcmin, detecting 19 sources down to a 3-sigma sensitivity limit of 3.5 mJy at 850 microns. We have used Monte-Carlo simulations to assess the effect of source confusion and noise on the SCUBA fluxes and positions, finding that the fluxes of sources in the SCUBA surveys are significantly biased upwards and that the fraction of the 850-micron background that has been resolved by SCUBA has been overestimated. The radio/submillimetre flux ratios imply that the dust in these galaxies is being heated by young stars rather than AGN. We have used simple evolution models based on our parallel SCUBA survey of the local universe to address the major questions about the SCUBA sources: (1) What fraction of the star formation at high redshift is hidden by dust? (2) Does the submillimetre luminosity density reach a maximum at some redshift? (3) If the SCUBA sources are proto-ellipticals, when exactly did ellipticals form? However, we show that the observations are not yet good enough for definitive answers to these questions. There are, for example, acceptable models in which 10 times as much high-redshift star formation is hidden by dust as is seen at optical wavelengths, but also acceptable ones in which the amount of hidden star formation is less than that seen optically. There are acceptable models in which very little star formation occurred before a redshift of three (as might be expected in models of hierarchical galaxy formation), but also ones in which 30% of the stars have formed by this redshift. The key to answering these questions is measurements of the dust temperatures and redshifts of the SCUBA sources.
    Comment: 41 pages (LaTeX), 17 postscript figures, to appear in the November issue of the Astronomical Journal.
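    The upward flux bias mentioned above can be illustrated with a toy Monte-Carlo: measure sources of known flux in the presence of instrumental noise and faint confusing sources, keep only those above the detection threshold, and compare recovered to input fluxes. The noise levels, confusion scatter, and threshold below are assumptions chosen for illustration; this is not the survey's actual simulation.

    # Toy Monte-Carlo of submillimetre flux boosting: sources near the detection
    # limit are preferentially recovered on top of positive noise/confusion
    # fluctuations, biasing measured fluxes upward. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 20000
    true_flux = 3.5            # mJy, near the 3-sigma survey limit quoted above
    sigma_noise = 3.5 / 3.0    # mJy, assumed instrumental noise per beam
    sigma_confusion = 0.8      # mJy, assumed scatter from faint confusing sources
    detect_threshold = 3.5     # mJy, detection cut

    measured = (true_flux
                + rng.normal(0.0, sigma_noise, n_trials)
                + rng.normal(0.0, sigma_confusion, n_trials))
    detected = measured[measured >= detect_threshold]

    print(f"fraction detected:     {detected.size / n_trials:.2f}")
    print(f"mean recovered flux:   {detected.mean():.2f} mJy")
    print(f"boost factor vs. true: {detected.mean() / true_flux:.2f}")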

    Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol

    Background: Understanding implementation processes is key to ensuring that complex interventions in healthcare are taken up in practice and thus maximize intended benefits for service provision and (ultimately) care to patients. Normalization Process Theory (NPT) provides a framework for understanding how a new intervention becomes part of normal practice. This study aims to develop and validate simple generic tools derived from NPT, to be used to improve the implementation of complex healthcare interventions.
    Objectives: The objectives of this study are to: develop a set of NPT-based measures and formatively evaluate their use for identifying implementation problems and monitoring progress; conduct preliminary evaluation of these measures across a range of interventions and contexts, and identify factors that affect this process; explore the utility of these measures for predicting outcomes; and develop an online users' manual for the measures.
    Methods: A combination of qualitative (workshops, item development, user feedback, cognitive interviews) and quantitative (survey) methods will be used to develop NPT measures, and to test the utility of the measures in six healthcare intervention settings.
    Discussion: The measures developed in the study will be available for use by those involved in planning, implementing, and evaluating complex interventions in healthcare, and have the potential to enhance the chances of their implementation, leading to sustained changes in working practices.

    Heterogeneity in periodontitis prevalence in the Hispanic Community Health Study/Study of Latinos

    To examine acculturation and established risk factors in explaining variation in periodontitis prevalence among Hispanic/Latino subgroups.

    Aptamer-based multiplexed proteomic technology for biomarker discovery

    Interrogation of the human proteome in a highly multiplexed and efficient manner remains a coveted and challenging goal in biology. We present a new aptamer-based proteomic technology for biomarker discovery capable of simultaneously measuring thousands of proteins from small sample volumes (15 µL of serum or plasma). Our current assay allows us to measure ~800 proteins with very low limits of detection (1 pM average), 7 logs of overall dynamic range, and 5% average coefficient of variation. This technology is enabled by a new generation of aptamers that contain chemically modified nucleotides, which greatly expand the physicochemical diversity of the large randomized nucleic acid libraries from which the aptamers are selected. Proteins in complex matrices such as plasma are measured with a process that transforms a signature of protein concentrations into a corresponding DNA aptamer concentration signature, which is then quantified with a DNA microarray. In essence, our assay takes advantage of the dual nature of aptamers as both folded binding entities with defined shapes and unique sequences recognizable by specific hybridization probes. To demonstrate the utility of our proteomics biomarker discovery technology, we applied it to a clinical study of chronic kidney disease (CKD). We identified two well-known CKD biomarkers as well as an additional 58 potential CKD biomarkers. These results demonstrate the potential utility of our technology to discover unique protein signatures characteristic of various disease states. More generally, we describe a versatile and powerful tool that allows large-scale comparison of proteome profiles among discrete populations. This unbiased and highly multiplexed search engine will enable the discovery of novel biomarkers in a manner that is unencumbered by our incomplete knowledge of biology, thereby helping to advance the next generation of evidence-based medicine.
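    The assay-performance figures quoted above (for example the ~5% average coefficient of variation and the multi-log dynamic range) are per-analyte statistics summarized over replicate measurements. A minimal sketch of how such a summary could be computed from a replicate-by-analyte matrix is shown below; the data, noise level, and array shape are invented for illustration and are not from the paper.

    # Hypothetical sketch: per-analyte coefficient of variation (CV) from replicate
    # measurements, summarized as an assay-wide average. The matrix below is random
    # toy data, not measurements from the paper.
    import numpy as np

    rng = np.random.default_rng(1)
    n_replicates, n_analytes = 10, 800

    # Simulated measured concentrations: lognormal true levels with ~5% technical noise.
    true_levels = rng.lognormal(mean=0.0, sigma=2.0, size=n_analytes)      # arbitrary units
    measurements = true_levels * rng.normal(1.0, 0.05, size=(n_replicates, n_analytes))

    cv = measurements.std(axis=0, ddof=1) / measurements.mean(axis=0)       # per-analyte CV
    print(f"average CV across {n_analytes} analytes: {cv.mean():.1%}")
    print(f"dynamic range spanned: {np.log10(true_levels.max() / true_levels.min()):.1f} logs")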